---
layout: post
---
Demonstrate understanding of the use of copulas as part of the process of modeling multivariate risks, including recommendation of an appropriate copula
ERM is interested in all the risks an organization faces and the ways they interact with each other
This module focuses on the theory and application of copulas
The use of these techniques to model specific types of risk will be covered in Part 5
PDF and CDF
For any r.v. \(X\):
The CDF \(F_X\) has a range of \([0,1]\) (a PDF is non-negative but need not be bounded by 1)
Marginal PDF
\(P(X = x) = \sum \limits_y P(X=x, Y=y)\)
\(f_X(x) = \int \limits_y f_{X,Y}(x,y)dy\)
Conditional PDF
\(P(X=x \mid Y=y) = \dfrac{P(X=x,Y=y)}{P(Y=y)}\)
\(f_{X \mid Y = y}(x) = \dfrac{f_{X,Y}(x,y)}{f_Y(y)}\)
Expectation
\(\mathrm{E}[g(X,Y)] = \sum \limits_x \sum \limits_y g(x,y)P(X=x,Y=y)\)
\(\mathrm{E}[g(X,Y)] = \int \limits_y \int \limits_x g(x,y)f_{X,Y}(x,y)dxdy\)
Covariance
\(\mathrm{Cov}(X,Y) = \mathrm{E}[(X-\mathrm{E}(X))(Y - \mathrm{E}(Y))] = \mathrm{E}(XY) - \mathrm{E}(X)\mathrm{E}(Y)\)
Correlation
\(\mathrm{Corr}(X,Y) = \rho(X,Y) = \dfrac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}}\)
Sums and Products of Moments
\(\mathrm{E}(X+Y) = \mathrm{E}(X) + \mathrm{E}(Y)\)
\(\mathrm{E}(XY) = \mathrm{E}(X)\mathrm{E}(Y) + \mathrm{Cov}(X,Y)\)
The above 2 equations are also true for functions \(g(X)\) and \(h(Y)\) of the r.v.s
\(\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\mathrm{Cov}(X, Y)\)
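A quick numerical check of these moment identities, using a small made-up discrete joint distribution (the values and probabilities below are illustrative, not from the course):

```python
# Verify E(X+Y), E(XY) and Var(X+Y) identities on a small
# discrete joint distribution (values and probabilities are
# made up for illustration).
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def E(g):
    """Expectation of g(X, Y) under the joint pmf."""
    return sum(p * g(x, y) for (x, y), p in pmf.items())

EX, EY = E(lambda x, y: x), E(lambda x, y: y)
cov = E(lambda x, y: x * y) - EX * EY
var_x = E(lambda x, y: x * x) - EX**2
var_y = E(lambda x, y: y * y) - EY**2

assert abs(E(lambda x, y: x + y) - (EX + EY)) < 1e-12
assert abs(E(lambda x, y: x * y) - (EX * EY + cov)) < 1e-12
var_sum = E(lambda x, y: (x + y)**2) - (EX + EY)**2
assert abs(var_sum - (var_x + var_y + 2 * cov)) < 1e-12
print("moment identities hold")
```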
For ERM we need to model all the risks an org. faces and their inter-dependencies
One way to do so is with a joint distribution function for all the risks
\(P(X_i = x_i:i=1...N) = f_{X_1,X_2,...,X_N}(x_1, x_2,...,x_N)\)
Corresponding joint (cumulative) distribution functions (CDF):
\(P(X_i \leq x_i:i=1...N) = F_{X_1,X_2,...,X_N}(x_1, x_2,...,x_N)\)
In the context of joint distribution functions, each of the org’s risks is represented by a marginal distribution
\(f_{X_1}(x_1) = \int \limits_{x_2} \dots \int \limits_{x_N} f_{X_1,X_2,...,X_N}(x_1,x_2,...,x_N)dx_2dx_3...dx_N\)
The joint distribution will combine information from the marginal risk distribution with other information on the way in which the risks interrelate or depend on one another
The joint distribution expresses this dependence of interrelated factors on one another, but it does so implicitly (you can’t immediately see the nature of the interdependence by looking at the formula for the joint distribution function)
A copula can reflect this interdependence of factors explicitly
Copula (\(C\)):
Expresses a multivariate cumulative distribution functions in terms of the individual marginal cumulative distributions
\(P(X_i \leq x_i ; i = 1...N) = F_{X_1,...,X_N}(x_1,...,x_N) = C_{X_1,...,X_N}\left[F_{X_1}(x_1),...,F_{X_N}(x_N)\right]\)
Key Idea:
\(\left\{\left\{\begin{array}{c}\text{Marginal distribution of} \\ \text{each risk factor} \end{array}\right\} \text{combined with {Copula}} \right\} = \left\{\begin{array}{c}\text{Joint distribution of} \\ \text{risk factors} \end{array}\right\}\)
Can think of copula as a CDF in many dimensions. It takes marginal probabilities and combines them so as to produce a joint probability
N-dimension copula:
\(C(\mathbf{U}) = C(u_1,u_2,...,u_N) = P(U_1 \leq u_1,...,U_N \leq u_N)\)
Key benefits
Given joint PDF: \(f_{X,Y}(x,y) = 6x^2y\) for \(0 < x < 1\), \(0 < y < 1\)
Marginal PDF: \(f_X(x) = \int \limits_0^1 6x^2 y \, dy = 3x^2\) and \(f_Y(y) = \int \limits_0^1 6x^2 y \, dx = 2y\)
Marginal CDF: \(F_X(x) = x^3\) and \(F_Y(y) = y^2\)
Joint CDF:
\(F_{X,Y}(x,y) = \int \limits_{0}^x \int \limits_{0}^y 6t^2 s ds dt = \int \limits_0^x \left[6t^2 \dfrac{s^2}{2} \right]^y_0 dt = \int \limits_0^x 3t^2y^2 dt = \left[3 \dfrac{t^3}{3} y^2 \right]^x_0 = x^3 y^2\)
Copula corresponding to the joint PDF
\(u = F_X(x) = x^3 \Rightarrow x = u^{\frac{1}{3}}\)
\(v = F_Y(y) = y^2 \Rightarrow y = v^{\frac{1}{2}}\)
\(F_{X,Y}(x,y) = x^3y^2 = uv = C_{X,Y}[u,v]\)
The joint CDF can be described fully by \(C_{X,Y}[u,v]\) and the marginal distributions
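A minimal numerical sketch of the worked example: the joint CDF \(x^3 y^2\) coincides with the independence copula \(C(u,v) = uv\) evaluated at the marginal probabilities \(u = x^3\), \(v = y^2\):

```python
# Check that F_{X,Y}(x, y) = x**3 * y**2 equals C(u, v) = u * v
# evaluated at u = F_X(x) = x**3 and v = F_Y(y) = y**2, on a
# small grid of points inside the unit square.
def F_joint(x, y):
    return x**3 * y**2

def C(u, v):
    # copula implied by the worked example (independence copula)
    return u * v

for x in (0.2, 0.5, 0.9):
    for y in (0.3, 0.7):
        u, v = x**3, y**2
        assert abs(F_joint(x, y) - C(u, v)) < 1e-12
print("F_{X,Y}(x,y) == C[F_X(x), F_Y(y)] on the grid")
```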
Consider how copulas relate to probability distributions and review the concept of dependence
Properties of Copulas
Property 1
Increasing the range of values for the variables must increase the probability of observing a combination within that range
\(C(u_1,u_2,...,u_N)\) is an increasing function of each input variable
Property 2
If we “integrate out” all the other variables (by setting CDFs equal to the maximum value of 1 so as to include all possible values), we will just have the marginal distribution of variable \(i\)
\(C(1,...,1,u_i,1,...,1) = u_i\) for \(i=1,2,...,N\) and \(u_i \in [0,1]\)
Property 3
This property ensures that a valid probability (i.e. non-negative) is produced by the copula function for any valid combination of the parameters
For all \((a_1,...,a_N)\) and \((b_1,...,b_N)\) with \(0 \leq a_i \leq b_i \leq 1\), and writing \(u_{i1} = a_i\), \(u_{i2} = b_i\):
\(\sum \limits_{i_1 = 1}^2 \sum \limits_{i_2 = 1}^2 \dots \sum \limits_{i_N = 1}^2 (-1)^{i_1+\dots+i_N} C(u_{1i_1},...,u_{Ni_N}) \geq 0\)
Example of property 3
Let \((a_1, a_2)\) and \((b_1, b_2)\) be values such that \(0\leq a_1 \leq b_1 \leq 1\) and \(0\leq a_2 \leq b_2 \leq 1\)
Then:
\(\begin{align} & \sum \limits_{i_1 = 1}^2 \sum \limits_{i_2 = 1}^2 (-1)^{i_1 + i_2}C(u_{1i_1},u_{2i_2}) \geq 0 \\ & \Rightarrow \sum \limits_{i_1 = 1}^2 \left((-1)^{i_1 +1}C(u_{1i_1},u_{21}) + (-1)^{i_1 +2} C(u_{1i_1},u_{22}) \right) \geq 0\\ & \Rightarrow (-1)^2 C(u_{11},u_{21}) + (-1)^3 C(u_{11},u_{22}) + (-1)^3 C(u_{12},u_{21}) + (-1)^4C(u_{12},u_{22}) \geq 0 \\ & \Rightarrow C(u_{11},u_{21}) - C(u_{11},u_{22}) - C(u_{12},u_{21}) + C(u_{12},u_{22}) \geq 0 \\ \end{align}\)
By definition, \(u_{11} = a_1\), \(u_{21} = a_2\), \(u_{12} = b_1\) and \(u_{22} = b_2\) so this requires that:
\(C(a_1,a_2) - C(a_1,b_2) - C(b_1,a_2) + C(b_1,b_2) \geq 0\)
The inequality is equivalent to saying that the rectangle shaded diagonally downwards in the following diagram always has non-negative probability
Now:
\(\begin{align} C(b_1, b_2) - C(a_1, b_2) &= P(U_1 \leq b_1, U_2 \leq b_2) - P(U_1 \leq a_1, U_2 \leq b_2) \\ &= P(a_1 \leq U_1 \leq b_1, U_2 \leq b_2)\\ \end{align}\)
And:
\(\begin{align} C(b_1, a_2) - C(a_1, a_2) &= P(U_1 \leq b_1, U_2 \leq a_2) - P(U_1 \leq a_1, U_2 \leq a_2) \\ &= P(a_1 \leq U_1 \leq b_1, U_2 \leq a_2)\\ \end{align}\)
Substituting into the inequality:
\(\begin{align} & P(a_1 \leq U_1 \leq b_1, U_2 \leq b_2) - P(a_1 \leq U_1 \leq b_1, U_2 \leq a_2) \geq 0 \\ & \Rightarrow P(a_1 \leq U_1 \leq b_1, a_2 \leq U_2 \leq b_2) \geq 0 \end{align}\)
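Property 3 can be spot-checked numerically; the sketch below uses the independence copula \(C(u,v) = uv\) and random rectangles (the choice of copula and sample size are illustrative):

```python
import random

# Property 3 (rectangle inequality) checked numerically for the
# independence copula C(u, v) = u * v: the copula mass assigned
# to any rectangle [a1, b1] x [a2, b2] must be non-negative.
def C(u, v):
    return u * v

random.seed(1)
for _ in range(1000):
    a1, b1 = sorted(random.random() for _ in range(2))
    a2, b2 = sorted(random.random() for _ in range(2))
    mass = C(b1, b2) - C(a1, b2) - C(b1, a2) + C(a1, a2)
    assert mass >= -1e-12
print("rectangle inequality holds for all sampled rectangles")
```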
Let \(F\) be a joint distribution function with marginal CDF \(F_1,...,F_N\)
\(\exists \: C : \forall \: x_1,...,x_N \in [-\infty, \infty]\)
\(F(x_1,...,x_N) = C(F_1(x_1),...,F_N(x_N))\)
Sklar’s theorem states that if the marginal cumulative distributions are continuous, then \(C\) is unique
Conversely, if \(C\) is a copula and \(F_1,...,F_N\) are univariate CDF, then the function \(F\) (\(F(x_1,...,x_N) = C(F_1(x_1),...,F_N(x_N))\)) is a joint cumulative distribution function with marginal CDF \(F_1,...,F_N\)
Definition of the copula of a distribution
If the vector of r.v. \(X\) has joint CDF \(F\) with continuous marginal CDF \(F_1,...,F_N\), then the copula of the distribution \(F\) is the distribution function \(C(F_1(x_1),...,F_N(x_N))\)
Empirical copula function describes the relationship between the marginal variables based upon their respective ranks
Consider a series of joint observations \((X_t, Y_t)\) for \(t= 1,2,...,T\)
Method 1
Define:
\(\begin{align} F(x,y) &= \Pr(X_t \leq x, Y_t \leq y)\\ &= \dfrac{1}{1+T}\sum\limits_{s=1}^T I(X_s \leq x, Y_s \leq y) \\ \end{align}\)
In which case:
\(\dfrac{1}{1+T} \leq F(x,y) \leq \dfrac{T}{1+T}\)
For a sample with 99 values this would correspond to \(0.01 \leq F(x,y) \leq 0.99\)
Example
10 vectors of data \(\mathbf{X}_1,...,\mathbf{X}_{10}\)
\(\hat{F}(2.7, 1.4) = \dfrac{1}{11} \times 3 = \dfrac{3}{11} = 0.273\)
Method 2
Apply a continuity correction and define:
\(\begin{align} F(x,y) &= \Pr(X_t \leq x, Y_t \leq y) \\ &= \dfrac{1}{T} \left[ \sum\limits_{s=1}^T I(X_s \leq x, Y_s \leq y) - \dfrac{1}{2} \right] \end{align}\)
In which case:
\(\dfrac{1}{2T} \leq F(x,y) \leq \dfrac{T - \frac{1}{2}}{T}\)
Example
\(\hat{F}(2.7, 1.4) = \dfrac{1}{10}(3-0.5) = 0.25\)
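Both empirical estimators can be sketched on made-up data; the ten points below are hypothetical, chosen so that exactly 3 of them satisfy \(X \leq 2.7\) and \(Y \leq 1.4\) as in the example:

```python
# Both empirical joint-CDF estimators applied to 10 hypothetical
# joint observations (the data are made up so that exactly 3
# points satisfy X <= 2.7 and Y <= 1.4, matching the example).
data = [(1.0, 0.5), (2.5, 1.1), (2.7, 1.4),   # the 3 "small" points
        (3.0, 0.2), (0.1, 2.0), (4.0, 4.0),
        (2.9, 1.3), (1.5, 1.6), (3.5, 0.9), (5.0, 5.0)]
T = len(data)

count = sum(xs <= 2.7 and ys <= 1.4 for xs, ys in data)

F1 = count / (T + 1)          # Method 1: divide by T + 1
F2 = (count - 0.5) / T        # Method 2: continuity correction

print(round(F1, 3), round(F2, 3))   # 0.273 0.25
```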
For 2 variables \(X\) and \(Y\), key property of a copula:
\(F(x,y) = P[X\leq x, Y\leq y] = C[F_X(x), F_Y(y)]\)
For each copula there is a corresponding survival copula defined by the “opposite relationship”
\(\bar{F}(x,y) = P[X > x, Y > y] = \bar{C}[\bar{F}_X(x), \bar{F}_Y(y)]\)
So the survival copula expresses the joint survival probability in terms of the marginal survival probabilities
Relationship between the survival copulas and the ordinary copulas
\(\bar{C}(1-u,1-v) = 1 - u- v + C(u,v)\)
Derivation of the above:
\(P[X \leq x \text{ or } Y \leq y] = 1 - P[X > x, Y>y]\)
So: \(1 - P[X > x, Y>y] = P[X\leq x] + P[Y\leq y] - P[X\leq x, Y \leq y]\)
In terms of copula: \(1 - \bar{C}[\bar{F}_X(x), \bar{F}_Y(y)] = F_X(x) + F_Y(y) - C[F_X(x), F_Y(y)]\)
Rearranging:\(\bar{C}[\bar{F}_X(x), \bar{F}_Y(y)] = 1- F_X(x) - F_Y(y) + C[F_X(x), F_Y(y)]\)
And we get the above formula
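A quick numerical check of the survival-copula relationship, using the independence copula (for which \(P(U > u, V > v) = (1-u)(1-v)\) can be written down directly):

```python
import random

# Check C_bar(1-u, 1-v) = 1 - u - v + C(u, v) for the
# independence copula.  Under independence, the joint survival
# probability P(U > u, V > v) is simply (1-u)(1-v), which plays
# the role of C_bar(1-u, 1-v).
def C(u, v):
    return u * v

def survival(u, v):
    # P(U > u, V > v) under independence
    return (1 - u) * (1 - v)

random.seed(2)
for _ in range(1000):
    u, v = random.random(), random.random()
    assert abs(survival(u, v) - (1 - u - v + C(u, v))) < 1e-12
print("survival-copula identity verified")
```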
The graphic below illustrates the relationship on a copula density plot in terms of two variables \(X_1\) and \(X_2\)
Copula density function describes the rate of change of the copula CDF
Calculated by partial differentiation w.r.t each of the variables
\(c(u_1,...,u_N) = \dfrac{\partial^N C(u_1,...,u_N)}{\partial u_1 ... \partial u_N}\)
If all distribution functions are continuous
\(c(u_1,...,u_N) = \dfrac{f(x_1,...,x_N)}{f_1(x_1)f_2(x_2)...f_N(x_N)}\) where \(f_i\) is the marginal density of \(X_i\)
Identify the different forms of association (e.g. linear correlation with Pearson’s \(\rho\)) and use this categorization to select suitable candidate copulas from the list of established copulas (or develop a bespoke copula function)
Concordance does not imply that one variable directly influences the other (i.e does not imply that one is dependent upon the other)
The linear and rank correlation measures (from mod 15) indicate concordance (or association) but do not imply dependence
Axioms for a good measure of concordance
Scarsini’s properties of a good measure of concordance, \(M_{X,Y}\), between 2 variables (\(X\) and \(Y\)) that are linked by a specified copula \(C(F_X(x),F_Y(y))\):
Properties above imply other properties of good measures of concordance:
Spearman’s \(\rho\) and Kendall’s \(\tau\) both satisfy these criteria for a good measure of concordance
Pearson’s \(\rho\) only fulfills all the criteria when all the marginal distributions are elliptical
Copulas can be used to describe the full relationship between the marginal distributions
Tail dependencies are of particular interest in RM as they describe joint concentrations of risk where they might be of particular concern (at the extremes of the marginal distributions)
Coefficient of the lower tail dependence:
\(\begin{align} _L\lambda_{X,Y} &= \lim \limits_{u \rightarrow 0^+} P\left(X \leq F_X^{-1}(u) \mid Y \leq F_Y^{-1}(u) \right) \\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{C(u,u)}{u}\\ \end{align}\)
Coefficient of the upper tail dependence:
\(\begin{align} _U\lambda_{X,Y} &= \lim \limits_{u \rightarrow 1^-} P\left(X > F_X^{-1}(u) \mid Y > F_Y^{-1}(u) \right) \\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{\bar{C}(u,u)}{u}\\ \end{align}\)
Visually, on a copula density plot:
Level of tail dependences exhibited by a particular set of data will help to indicate which copula(s) might be appropriate to consider fitting
Because each copula has a specific degree of tail dependence which may be parameterized
Derivation of the above formulas
\(\begin{align} _{L}\lambda_{X,Y} &= \lim \limits_{u \rightarrow 0^+} P\left(X \leq F_X^{-1}(u) \mid Y \leq F_Y^{-1}(u) \right) \\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{P\left(X \leq F_X^{-1}(u), Y \leq F_Y^{-1}(u) \right)}{P(Y \leq F_Y^{-1}(u))} \\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{P\left(F_X(X) \leq u, F_Y(Y) \leq u \right)}{P(F_Y(Y) \leq u)} \\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{C(u,u)}{C(1,u)}\\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{C(u,u)}{u}\\ \end{align}\)
\(\begin{align} _{U}\lambda_{X,Y} &= \lim \limits_{u \rightarrow 1^-} P\left(X > F_X^{-1}(u) \mid Y > F_Y^{-1}(u) \right) \\ &= \lim \limits_{u \rightarrow 1^-} \dfrac{P\left(X > F_X^{-1}(u), Y > F_Y^{-1}(u) \right)}{P(Y > F_Y^{-1}(u))} \\ &= \lim \limits_{u \rightarrow 1^-} \dfrac{P\left(F_X(X) > u, F_Y(Y) > u \right)}{P(F_Y(Y) > u)} \\ &= \lim \limits_{u \rightarrow 1^-} \dfrac{\bar{C}(1-u,1-u)}{1-u}\\ &= \lim \limits_{u \rightarrow 0^+} \dfrac{\bar{C}(u,u)}{u}\\ \end{align}\)
The final equality above follows from replacing \(1-u\) with \(u\) and noting that letting \(u \rightarrow 0^+\) in the limit of the new expression is the same as letting \(u \rightarrow 1^-\) in the previous expression
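The limit defining lower tail dependence can be evaluated numerically; the sketch below uses the bivariate Clayton copula (introduced later in these notes) with an illustrative \(\alpha = 2\), whose lower tail dependence is known to be \(2^{-1/\alpha}\):

```python
# Evaluate C(u, u) / u for shrinking u with the bivariate Clayton
# copula and compare against its known lower tail dependence
# coefficient 2 ** (-1 / alpha).  alpha = 2 is illustrative.
alpha = 2.0

def clayton(u, v, a=alpha):
    return (u**(-a) + v**(-a) - 1) ** (-1 / a)

exact = 2 ** (-1 / alpha)
for u in (1e-2, 1e-4, 1e-6):
    print(u, clayton(u, u) / u)          # converges towards `exact`

assert abs(clayton(1e-6, 1e-6) / 1e-6 - exact) < 1e-6
```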
Fundamental Copulas
They represent the three basic dependencies that a set of variables can display
They can be combined to form a wider family of copula functions called the Fréchet-Höffding family
Explicit Copulas
They have simple closed-form expressions
We will look at the general class of Archimedean copulas e.g. Clayton copula
Implicit Copulas
They are based on well-known multivariate distributions, but no simple closed-form expression exists
e.g. Gaussian copula (based on the normal distribution) and the t copula (based on t distribution)
Fundamental copulas:
Independence copula (or product copula)
\(\begin{align} _{ind}C(F_{X_1}(x_1),...,F_{X_N}(x_N)) &= _{ind}C(u_1,...,u_N) \\ &= \prod \limits_{i=1}^N u_i \\ &= \prod \limits_{i=1}^N F_{X_i}(x_i) \end{align}\)
The joint distribution is equal to the product of the individual distribution functions
As the variables are independent, there is no upper or lower tail dependence
i.e. \(_{L}\lambda = _{U}\lambda = 0\)
Co-monotonicity copula (or minimum copula)
\(\begin{align} _{min}C(F_{X_1}(x_1),...,F_{X_N}(x_N)) &=_{\mathrm{min}}C(u_1,...,u_N) \\ &= \mathrm{min}(u_1,...,u_N) \\ &= \mathrm{min}(F_{X_1}(x_1),...,F_{X_N}(x_N)) \\ \end{align}\)
Co-monotonicity copula represents the perfect positive dependence between variables
Counter-monotonicity copula (or maximum copula)
Only defined in 2 dimensions:
\(\begin{align} _{max}C(F_{X_1}(x_1),F_{X_2}(x_2)) &=_{\mathrm{max}}C(u,v) \\ &= \mathrm{max}(u+v-1,0) \\ &= \mathrm{max}(F_{X_1}(x_1) + F_{X_2}(x_2)-1,0) \\ \end{align}\)
Counter-monotonicity copula represents perfect negative dependence between two variables
Tail dependency between the variables will only manifest itself when the variables are at opposite ends
\(\therefore\) \(_{L}\lambda = _{U}\lambda = 0\)
\(\mathrm{max}\left\{ \left( \sum \limits_{i=1}^N u_i \right) +1 -N, 0\right\} \leq C(u_1,u_2,...,u_N) \leq \mathrm{min}\{u_1,u_2,...,u_N\}\)
In the bivariate case the co-monotonicity and counter-monotonicity copulas represent the extremes of the possible levels of association between variables
\(\therefore\) they are the upper and lower bound for all copulas (aka the Fréchet-Höffding Bounds)
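The Fréchet-Höffding bounds can be spot-checked numerically; the sketch below tests the independence copula and a Clayton copula (with an illustrative \(\alpha = 2\)) at random points:

```python
import random

# Spot-check the Frechet-Hoffding bounds: any bivariate copula
# value must lie between max(u + v - 1, 0) and min(u, v).
# Tested here for the independence copula and a Clayton copula
# with alpha = 2 (an illustrative parameter choice).
def independence(u, v):
    return u * v

def clayton(u, v, a=2.0):
    return (u**(-a) + v**(-a) - 1) ** (-1 / a)

random.seed(3)
for _ in range(1000):
    u, v = random.random(), random.random()
    lower, upper = max(u + v - 1, 0.0), min(u, v)
    for cop in (independence, clayton):
        assert lower - 1e-12 <= cop(u, v) <= upper + 1e-12
print("bounds hold at all sampled points")
```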
The fundamental copulas are specific cases of the general Fréchet-Höffding family of copulas which are of the form
\(\begin{align} _{F}C(F_{X_1}(x_1),...,F_{X_N}(x_N)) = & p \, \mathrm{max} \left(\left(\sum \limits_{i=1}^N F_{X_i}(x_i)\right)+1-N ,0 \right) \\ &+ (1-p-q)\prod \limits_{i=1}^N F_{X_i}(x_i) \\ &+ q \, \mathrm{min}(F_{X_1}(x_1),...,F_{X_N}(x_N))\\ \end{align}\)
In the bivariate case:
\(_{F}C(u,v) = p \, \mathrm{max}(u+v-1,0) + (1-p-q)uv + q \, \mathrm{min}(u,v)\)
Mixture copula is defined when \(p \geq 0\), \(q \geq 0\) and \(p + q \leq 1\), so the weights on the three fundamental copulas are non-negative and sum to 1
(Appendix of the CMP has more discussions on the attainable correlations)
Note how the co-monotonicity copula represents an upper bound to copulas and the counter-monotonicity copula represents the lower bound
An important class of explicit copulas with closed-form expressions
A valid generator function:
\(\psi: [0,1] \rightarrow [0, \infty]\) is a continuous and strictly decreasing function on \([0,1]\) with \(\psi(1) = 0\) and \(\psi(0) \leq \infty\)
Pseudo-inverse of \(\psi\), with domain \([0,\infty]\):
\(\psi^{[-1]}(x) = \begin{cases} \psi^{-1}(x) & 0 \leq x \leq \psi(0) \\ 0 & \psi(0) < x \leq \infty \\ \end{cases}\)
Archimedean class copulas have form:
\(C(u_1,...,u_N) = \psi^{[-1]}\left( \sum \limits_{i=1}^N \psi(u_i) \right)\)
Where \(\psi\) is a valid generator function which is additionally convex, i.e. \(\dfrac{d^2}{dx^2}\psi(x) \geq 0\)
For bivariate: \(C(u,v) = \psi^{[-1]}(\psi(u)+ \psi(v))\)
The definition of the pseudo-inverse function ensures that, even when \(\sum \limits_{i=1}^N \psi(u_i)\) exceeds \(\psi(0)\), the value of the Archimedean copula is still a valid probability (it is set to 0 in that case)
Advantage
Limitations
Kendall’s \(\tau\) is a function of the parameters of the copula (for Archimedean copulas)
For single parameter form: \(\tau = 1 + 4\int \limits_0^1 \dfrac{\psi(t)}{\psi'(t)}dt\)
For multiple parameters form:
(See copulas summary in appendix)
Generator function:
\(_{Gu} \psi _{\alpha}(u) = (-\ln u)^{\alpha}\) for \(1 \leq \alpha < \infty\)
Gumbel copula from generator function:
To find the inverse, solve for \(u\):
\(\begin{align} k &= (- \ln(u))^{\alpha} \\ -k^{\frac{1}{\alpha}} &= \ln (u) \\ u &= \exp \left( -k^{\frac{1}{\alpha}} \right) \\ \psi^{-1}(k) &= \exp \left( -k^{\frac{1}{\alpha}} \right) \\ \end{align}\)
Plug in \(C(u_1,...,u_N) = \psi^{[-1]}\left( \sum \limits_{i=1}^N \psi(u_i) \right)\):
\(\begin{align} _{Gu}C_{\alpha}(u_1,...,u_N) &= \psi^{[-1]}\left( \sum \limits_{i=1}^N \psi(u_i) \right) \\ &= \psi^{[-1]}\left( \sum \limits_{i=1}^N (-\ln(u_i))^{\alpha} \right) \\ &= \exp \left[ -\left( \sum \limits_{i=1}^N (-\ln(u_i))^{\alpha} \right)^{1/\alpha}\right] \\ \end{align}\)
e.g. bivariate case: \(_{Gu}C_{\alpha}(u,v) = \exp \left[ - \left((-\ln(u))^{\alpha} + (- \ln(v))^{\alpha} \right)^{1/\alpha}\right]\)
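A small sketch building the Gumbel copula from its generator and comparing the composition with the closed-form bivariate expression (for the Gumbel generator \(\psi(0) = \infty\), so the pseudo-inverse equals the ordinary inverse; \(\alpha = 1.5\) is illustrative):

```python
import math

# Build the bivariate Gumbel copula from its generator and
# pseudo-inverse, then check the composition against the
# closed-form expression.  alpha = 1.5 is an illustrative value.
alpha = 1.5

def psi(u, a=alpha):
    # generator: (-ln u)^alpha
    return (-math.log(u)) ** a

def psi_inv(k, a=alpha):
    # inverse generator: exp(-k^(1/alpha))
    return math.exp(-(k ** (1 / a)))

def gumbel_from_generator(u, v):
    return psi_inv(psi(u) + psi(v))

def gumbel_closed_form(u, v, a=alpha):
    return math.exp(-(((-math.log(u)) ** a
                       + (-math.log(v)) ** a) ** (1 / a)))

for u in (0.1, 0.5, 0.9):
    for v in (0.2, 0.8):
        assert abs(gumbel_from_generator(u, v)
                   - gumbel_closed_form(u, v)) < 1e-12
print("generator composition matches the closed form")
```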
Special cases of the Gumbel: \(\alpha = 1\) gives the independence copula; \(\alpha \rightarrow \infty\) gives the co-monotonicity copula
Application of Gumbel
The upper tail dependency (but no lower tail dependency) of the Gumbel makes it suitable for modeling situations where associations increase for extreme high values (but not for extreme low values)
Generator function:
\(_{Fr}\psi_{\alpha}(u) = - \ln \left( \dfrac{e^{-\alpha u} - 1}{e^{-\alpha} -1}\right)\) for \(\alpha \in \mathbb{R}\), \(\alpha \neq 0\)
Multivariate Frank Copula:
\(_{Fr}C_{\alpha}(u_1,...,u_N) = - \dfrac{1}{\alpha} \ln \left( 1 + \dfrac{\prod \limits_{i=1}^N (e^{-\alpha u_i}-1)}{(e^{-\alpha}-1)^{N-1}} \right)\)
If \(N>2\) then this is only defined for \(\alpha > 0\)
Bivariate case:
\(_{Fr}C_{\alpha}(u,v) = - \dfrac{1}{\alpha} \ln \left( 1 + \dfrac{ (e^{-\alpha u}-1)(e^{-\alpha v}-1)}{(e^{-\alpha}-1)} \right)\)
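A numerical sketch of the limiting behavior: as \(\alpha \rightarrow 0\) the bivariate Frank copula approaches the independence copula \(uv\) (the test point and \(\alpha\) values are illustrative):

```python
import math

# As alpha -> 0 the bivariate Frank copula should approach the
# independence copula u * v.  Evaluate at a fixed point for
# shrinking alpha and compare.
def frank(u, v, a):
    num = (math.exp(-a * u) - 1) * (math.exp(-a * v) - 1)
    return -math.log(1 + num / (math.exp(-a) - 1)) / a

u, v = 0.3, 0.8
for a in (1.0, 0.1, 0.001):
    print(a, frank(u, v, a), u * v)   # converges towards u * v

assert abs(frank(u, v, 0.001) - u * v) < 1e-3
```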
Special cases of the multidimensional Frank: as \(\alpha \rightarrow 0\) it tends to the independence copula
Application of Frank: it has neither upper nor lower tail dependence, so it suits situations where association sits in the body of the distributions rather than the tails
Generator function
\(_{Cl}\psi_{\alpha}(u) = \dfrac{1}{\alpha}(u^{-\alpha} - 1)\)
Multivariate Clayton copula
\(_{Cl}C_{\alpha}(u_1,...,u_N)=\left( \left\{ \sum \limits_{i=1}^N u_i^{-\alpha} \right\} -N +1 \right)^{-\frac{1}{\alpha}}\)
Bivariate case
\(_{Cl}C_{\alpha}(u,v) = \left(u^{-\alpha} + v^{-\alpha}-1 \right)^{-\frac{1}{\alpha}}\)
Application of Clayton: it exhibits lower (but not upper) tail dependence, making it suitable where associations increase for extreme low values
Generator function
\(_{GC}\psi_{\alpha,\beta}(u) = \dfrac{1}{\alpha^{\beta}}(u^{-\alpha}-1)^{\beta}\) for \(\alpha \geq 0\) \(\beta \geq 1\)
Multivariate generalized Clayton
\(_{GC}C_{\alpha,\beta}(u_1,...,u_N) = \left( \left\{ \sum \limits_{i=1}^N \left[ (u_i^{-\alpha} - 1)^{\beta} \right] \right\}^{\frac{1}{\beta}} + 1 \right)^{-\frac{1}{\alpha}}\)
Bivariate case
\(_{GC}C_{\alpha,\beta}(u,v) = \left( \left\{ (u^{-\alpha} - 1)^{\beta} + (v^{-\alpha} - 1)^{\beta} \right\}^{\frac{1}{\beta}} + 1 \right)^{-\frac{1}{\alpha}}\)
Special case: \(\beta = 1\) recovers the ordinary Clayton copula
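A numerical check that the generalized Clayton copula with \(\beta = 1\) collapses to the ordinary Clayton copula, using the generator-based form (note the \(+1\) term, which follows from inverting the generator):

```python
# Check that the generalised Clayton copula with beta = 1
# collapses to the ordinary Clayton copula (bivariate case).
def clayton(u, v, a):
    return (u**(-a) + v**(-a) - 1) ** (-1 / a)

def gen_clayton(u, v, a, b):
    # ( [ (u^-a - 1)^b + (v^-a - 1)^b ]^(1/b) + 1 )^(-1/a)
    s = ((u**(-a) - 1) ** b + (v**(-a) - 1) ** b) ** (1 / b)
    return (s + 1) ** (-1 / a)

for u in (0.2, 0.6, 0.95):
    for v in (0.3, 0.7):
        assert abs(gen_clayton(u, v, 2.0, 1.0)
                   - clayton(u, v, 2.0)) < 1e-12
print("beta = 1 recovers the Clayton copula")
```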
Application of generalized Clayton
Appendix provides precise information on the dependence measure of the Archimedean copulas
Important to note that the degree of tail dependency (where it exists) is a function of the parameter(s) of the copula
Difficult to see the differences between the example distribution functions above
Density functions below are more helpful
Note that the level of tail dependency is determined by a parameter \(\alpha\)
Let \(\mathbf{X}\) be a vector of \(N\) standard normal r.v. \(\therefore\) \(\mathbf{X}\) has multivariate normal distribution \(N(\mathbf{0,R})\), where \(\mathbf{R}\) is the correlation matrix of the individual r.v.
Normal Copula
\(\begin{align} _{Ga}C_{\mathbf{R}}(\mathbf{u}) &= P(\Phi (X_1) \leq u_1,...,\Phi (X_N) \leq u_N) \\ &= P(X_1 \leq \Phi^{-1}(u_1),...,X_N \leq \Phi^{-1}(u_N)) \\ &= \boldsymbol{\Phi}_{\mathbf{R}}(\Phi^{-1}(u_1),...,\Phi^{-1}(u_N)) \\ \end{align}\)
In general a correlation matrix will be symmetric and positive semi-definite, with 1s on the diagonal and off-diagonal entries in \([-1,1]\)
Special cases
\(\mathbf{R} = \mathbf{I}_N\) (\(N\)-dimensional identity matrix) \(\Rightarrow\) the independence copula
\(\mathbf{R}\) is a matrix consisting entirely of 1s \(\Rightarrow\) the co-monotonicity copula
Bivariate case
\(_{Ga}C_{\rho}(u,v) = \Phi_{\rho}(\Phi^{-1}(u),\Phi^{-1}(v))\)
Special case for the bivariate case (\(N=2\)) with \(\rho = -1\): the counter-monotonicity copula
In 2-D the Gaussian copula allows the joint distribution to reflect any dependence between the variables, from perfect positive to perfect negative dependence, depending on \(\rho\) \(\Rightarrow\) the Gaussian is a comprehensive copula (very versatile)
Gaussian copula can be described as an integral:
\(_{Ga}C_{\rho}(u_1, u_2) = \dfrac{1}{2 \pi \sqrt{1-\rho^2}} \int \limits_{-\infty}^{\Phi^{-1}(u_1)} \int \limits_{-\infty}^{\Phi^{-1}(u_2)} \exp \left( - \dfrac{s^2 - 2\rho s t + t^2}{2(1-\rho^2)} \right) ds \, dt\)
Nature of tail dependencies
In 2-D: if \(\mid \rho \mid < 1\) then the Gaussian copula has zero tail dependencies
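A Monte Carlo sketch of the bivariate Gaussian copula built directly from its definition (sample size, seed and \(\rho\) values are illustrative assumptions): with \(\rho = 0\) the estimate should match the independence copula, and with \(\rho\) near 1 it should approach \(\mathrm{min}(u,v)\):

```python
import math, random

# Monte Carlo estimate of the bivariate Gaussian copula,
# C(u, v) = P(Phi(X1) <= u, Phi(X2) <= v), by sampling
# correlated standard normals.
def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def gaussian_copula_mc(u, v, rho, n=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1 = z1
        x2 = rho * z1 + math.sqrt(1 - rho**2) * z2  # Corr(x1, x2) = rho
        hits += (phi(x1) <= u) and (phi(x2) <= v)
    return hits / n

u, v = 0.4, 0.7
est_ind = gaussian_copula_mc(u, v, rho=0.0)   # approx u * v = 0.28
est_com = gaussian_copula_mc(u, v, rho=0.99)  # approx min(u, v) = 0.4
print(est_ind, est_com)
```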
Disadvantage: with zero tail dependence, the Gaussian copula can understate the probability of joint extreme outcomes
Overcoming Normal’s Disadvantage: use the t copula, which does exhibit tail dependence
Let \(\mathbf{X}\) be a vector of \(N\) r.v. taken from a multivariate t with \(\gamma\) d.f., 0 mean and correlation matrix \(\mathbf{R}\)
Then t-copula:
\(_t C_{\gamma, \mathbf{R}}(\mathbf{u}) = t_{\gamma, \mathbf{R}}(t_{\gamma}^{-1}(u_1),...,t_{\gamma}^{-1}(u_N))\)
Special cases
If \(\mathbf{R} = \mathbf{I}_N\) (N-dimensional identity matrix): unlike the Gaussian case, this does not give the independence copula (uncorrelated t variables are not independent)
If \(\mathbf{R}\) is a matrix consisting entirely of 1s: the co-monotonicity copula
\(\gamma = \infty\): the t copula reduces to the Gaussian copula
Bivariate case
\(_t C_{\gamma,\rho}(u_1, u_2) = \dfrac{1}{2 \pi \sqrt{1-\rho^2}} \int \limits_{-\infty}^{t_{\gamma}^{-1}(u_1)} \int \limits_{-\infty}^{t_{\gamma}^{-1}(u_2)} \left(1 + \dfrac{s^2 +t^2 - 2\rho s t}{\gamma(1-\rho^2)} \right)^{-\frac{\gamma + 2}{2}} ds \, dt\)
Nature of tail dependencies: unlike the Gaussian copula, the t copula exhibits both upper and lower tail dependence, with strength decreasing as \(\gamma\) increases
Similarly, it is easier to see the difference with the density functions below